This replaces an early version of this post that appeared in June 2018.
Updated 5-15-24.
The Basics about AI
• What is AI? Everything you need to know about Artificial Intelligence (Nick Heath, ZDNet, 2-2-18) An executive guide to artificial intelligence, from machine learning and general AI to neural networks.
• What is an Internet bot? (Wikipedia) An Internet bot, web robot, robot, or simply bot, is a software application that runs automated tasks over the Internet, usually with the intent to imitate human activity, such as messaging, on a large scale.
• What is a bot: types and functions (Digital Guide IONOS UK, 11-16-21) What is a bot, what functions can it perform, and what does its structure consist of? Learn about Rule-based bots and self-learning bots, the different types of good bots, the different types of malware bots, and how they work. What types of attacks can botnets perform?
• ChatGPT (AI) This chatbot, launched by OpenAI in November 2022, is being used to write novels, among other things. It has a problem with factual accuracy. See also the section on ChatGPT (AI) elsewhere on this website.
Writers, journalists, and other creators and artificial intelligence: issues such as copyright protection, plagiarism, flaws and inaccuracies, and how frank creators must be about using AI
• FAQs on the Authors Guild’s Positions and Advocacy Around Generative AI
• A crash course for journalists on AI and machine learning (Video, 51 min., International Journalism Festival, 4-7-22)
• Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need (Casey Ross and Bob Herman, Stat Investigation, 3-13-23) A Pulitzer finalist series, for exposing how UnitedHealth Group, the nation's largest health insurer, used an unregulated algorithm to override clinicians' judgments and deny care, highlighting the dangers of AI use in medicine. Read the full series.
• CNET Is Quietly Publishing Entire Articles Generated By AI (Frank Landymore, The Byte, 1-15-23) "This article was generated using automation technology," reads a dropdown description. The articles are published under the unassuming appellation of "CNET Money Staff," and encompass topics like "Should You Break an Early CD for a Better Rate?" or "What is Zelle and How Does It Work?" That byline obviously does not paint the full picture, so your average reader visiting the site likely would have no idea that what they're reading is AI-generated.
(H/T to Jon Christian for links to this and the next four pieces)
• Google Is Using A.I. to Answer Your Health Questions. Should You Trust It? (Talya Minsberg, NY Times, 5-31-24) Experts say the new feature may offer dubious advice in response to personal health queries.
• CNET's Article-Writing AI Is Already Publishing Very Dumb Errors (Jon Christian, The Byte, Futurism, 1-29-23) CNET is now letting an AI write articles for its site. The problem? It's kind of a moron.
• Sports Illustrated Published Articles by Fake, AI-Generated Writers (Maggie Harrison Dupré, Futurism, 11-27-23) We asked them about it — and they deleted everything.
• CNET's AI Journalist Appears to Have Committed Extensive Plagiarism (Jon Christian, The Byte, Futurism, 1-23-23) CNET's AI-written articles aren't just riddled with errors. They also appear to be substantially plagiarized.
• BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes (Noor Al-Sibai and Jon Christian, Futurism, 3-30-23) These read like a proof of concept for replacing human writers--lots of repetition of pet phrases.
• The AI is eating itself (Casey Newton, Platformer, 6-27-23) Boy, is this post packed with info and insights. The third paragraph alone kept me online for an extra half-hour, following links to more good reading.
• The AI takeover of Google Search starts now (David Pierce, The Verge, 5-10-23) Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people. But if you opt in, a whole new search experience awaits.
• Google Rolls Back A.I. Search Feature After Flubs and Flaws (Nico Grant, NY Times, 6-1-24) Google appears to have turned off its new A.I. Overviews for a number of searches as it works to minimize errors.
• AI is killing the old web, and the new web struggles to be born (James Vincent, The Verge, 6-26-23) Generative AI models are changing the economy of the web, making it cheaper to generate lower-quality content. We’re just beginning to see the effects of these changes.
• New Tool Could Poison DALL-E and Other AI to Help Artists (Josh Hendrickson, PC Mag, 10-27-23) Researchers from the University of Chicago introduce a new tool, dubbed Nightshade, that can 'poison' AI and ruin its data set, leading it to generate inaccurate results.
---This new data poisoning tool lets artists fight back against generative AI (Melissa Heikkilä, MIT Technology Review, 10-23-23) The tool, called Nightshade, messes up training data in ways that could cause serious damage to image-generating AI models.
• Godfathers of AI Have a New Warning: Get a Handle on the Tech Before It's Too Late (Joe Hindy, PC Mag, 10-24-23) Two dozen experts warn that 'AI systems could rapidly come to outperform humans in an increasing number of tasks [and] pose a range of societal-scale risks.'
• How AP Investigated the Global Impacts of AI (Garance Burke, Pulitzer Center, 6-21-23) "When my editor Ron Nixon and I realized that too few journalists had gotten trained on how these complex statistical models work, we devised internal workshops to build capacity in AI accountability reporting....No surprise, FOIA and its equivalents are an imperfect tool and rarely yield raw code. Little transparency about the use of AI tools by government agencies can mean public knowledge is severely restricted, even if records are disclosed. Viewing predictive and surveillance tools in isolation doesn't capture their full global influence. The purchase and implementation of such technologies isn't necessarily centralized. Individual state and local agencies may use a surveillance or predictive tool on a free trial basis and never sign a contract. And even if federal agencies license a tool intending to implement it nationwide, that isn't always rolled out the same way in each jurisdiction."
• AI is being used to generate whole spam sites (James Vincent, The Verge, 5-2-23) A report identified 49 sites that use AI tools like ChatGPT to generate cheap and unreliable content. Experts warn the low costs of producing such text incentivizes the creation of these sites.
• The semiautomated social network is coming (James Vincent, The Verge, 3-10-23) LinkedIn announced last week it’s using AI to help write posts for users to chat about. Snap has created its own chatbot, and Meta is working on AI ‘personas.’ It seems future social networks will be increasingly augmented by AI.
• AI Art for Authors: Which Program to Use (Jason Hamilton, Kindlepreneur, 12-9-22) There are dozens of AI art tools out there, many with unique specialties. But most would agree that three stand up above the rest:
Midjourney
Dall-E 2
Stable Diffusion.
Hamilton discusses how to access them, what they cost, how they can be useful, and why he recommends them (or not, and what for, illustrated), with a final section on AI art's copyright problems: Are these tools copying existing art on the collage principle (a little here, a little there), and do they face legal and copyright problems?
• Artificial Labor (Ed Zitron's Where's Your Ed At, 5-12-23) With the 2023 Writers Guild of America strike, "we are entering a historical battle between actual labor – those who create value in organizations and the world itself – and the petty executive titans that believe that there are no true value creators in society, only “ideas people” and those interchangeable units who carry out their whims...The television and film industries are controlled by exceedingly rich executives that view entertainment as something that can (and should) be commoditized and traded, rather than fostered and created by human beings. While dialogue eventually has to be performed by a human being, the Alliance of Motion Picture and Television Producers clearly views writing (and writers) as more of a fuel that can be used to create products rather than something unique or special....entertainment’s elites very clearly want to be able to use artificial intelligence to write content."
• The Fanfic Sex Trope That Caught a Plundering AI Red-Handed (Rose Eveleth, Wired, 5-15-23) Sudowrite, a tool that uses OpenAI's GPT-3, was found to have understood a sexual act known only to a specific online community of Omegaverse writers. The data set used to train most (all?) text-generative AI includes sex acts found only in the raunchiest of fanfiction. "What if your work exists in a kind of in-between space—not work that you make a living doing, but still something you spent hours crafting, in a community that you care deeply about? And what if, within that community, there was a specific sex trope that would inadvertently unmask how models like ChatGPT scrape the web—and how that scraping impacts the writers who created it?" (H/T Nate Hoffelder, Morning Coffee)
• AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit (James Vincent, The Verge, 1-16-23) The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent. Butterick and Saveri are currently suing Microsoft, GitHub, and OpenAI in a similar case involving the AI programming model CoPilot, which is trained on lines of code collected from the web.
• The lawsuit that could rewrite the rules of AI copyright (James Vincent, The Verge, 11-8-22) Microsoft, its subsidiary GitHub, and its business partner OpenAI have been targeted in a proposed class action lawsuit alleging that the companies' creation of AI-powered coding assistant GitHub Copilot relies on "software piracy on an unprecedented scale."
---"Someone comes along and says, 'Let's socialize the costs and privatize the profits.'"
---“This is the first class-action case in the US challenging the training and output of AI systems. It will not be the last.”
• The scary truth about AI copyright is nobody knows what will happen next (James Vincent, The Verge, 11-15-22) The last year has seen a boom in AI models that create art, music, and code by learning from others’ work. But as these tools become more prominent, unanswered legal questions could shape the future of the field.
• Wendy’s to test AI chatbot that takes your drive-thru order (Erum Salam, The Guardian, 5-10-23; also carried by the St. Louis Post-Dispatch) Wendy's is ready to roll out an artificial-intelligence-powered chatbot capable of taking customers' orders. The pilot program ‘seeks to take the complexity [the humans] out of the ordering process.’
• In a Reminder of AI's Limits, ChatGPT Fails Gastro Exam (Michael DePeau-Wilson, MedPage Today, 5-22-23) Both versions of the AI model failed to achieve the 70% accuracy threshold to pass.
• Some companies are already replacing workers with ChatGPT, despite warnings it shouldn’t be relied on for ‘anything important’ (Trey Williams, Fortune, 2-25-23)
• ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (NY Times, 5-1-23) For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
• Teaching A.I. Systems to Behave Themselves (Cade Metz, NY Times, 8-13-17)
On the plus or minus side:
• Smarter health: How AI is transforming health care (Dorey Scheimer, Meghna Chakrabarti, and Tim Skoog, On Point, first piece in a Smarter Health series, WBUR radio, 5-27-22, with transcript) Guests Dr. Ziad Obermeyer (associate professor of health policy and management at the University of California, Berkeley School of Public Health. Emergency medicine physician) and Richard Sharp (director of the biomedical ethics research program at the Mayo Clinic, @MayoClinic) explore the potential of AI in health care — from predicting patient risk, to diagnostics, to just helping physicians make better decisions.
• Artificial Intelligence Is Primed to Disrupt Health Care Industry (Ben Hernandez, ETF Trends, 7-12-15) Artificial intelligence (AI) is one of the prime technologies leading the wave of disruption going on within the health care sector. Recent studies have shown that AI technology can outperform doctors when it comes to cancer screenings and disease diagnoses; in particular, specialists such as radiologists and pathologists could be replaced by AI technology. Whether society is ready for it or not, robotics, AI, machine learning, and other disruptive technologies will be the next wave of innovation.
• How will large language models (LLMs) change the world? (Dynomight Internet Newsletter, The Browser, 12-8-22) Think about historical analogies for 'large language models': the ice trade and freezers; chess humans and chess AIs; farmers and tractors; horses and railroads; swords and guns; swordfighting and fencing; artisanal goods and mass production; site-built homes and pre-manufactured homes; painting and photography; feet and Segways; gull-wing and scissor doors; sex and pornography; human calculators and electronic calculators.
• Artificial You: AI and the Future of Your Mind by Susan Schneider. Can robots really be conscious? Is the mind just a program? "Schneider offers sophisticated insights on what is perhaps the number one long-term challenge confronting humanity."―Martin Rees
• Top 9 ethical issues in artificial intelligence (Julia Bossmann, World Economic Forum, 10-21-16) In brief: unemployment, income inequality, humanity, artificial stupidity (mistakes), racist robots (AI bias), security (safety from adversaries), evil genies (unintended consequences), singularity, robot rights. She makes interesting points!
• AI in the workplace: Everything you need to know (Nick Heath, ZDNet, 6-29-18) How artificial intelligence will change the world of work, for better and for worse. Bots and virtual assistants, IoT and analytics, and so on.
• What is the IoT? Everything you need to know about the Internet of Things right now (Steve Ranger, ZDNet, 1-19-18) The Internet of Things explained: What the IoT is, and where it's going next. "Pretty much any physical object can be transformed into an IoT device if it can be connected to the internet and controlled that way. A lightbulb that can be switched on using a smartphone app is an IoT device, as is a motion sensor or a smart thermostat in your office or a connected streetlight. An IoT device could be as fluffy as a child's toy or as serious as a driverless truck, or as complicated as a jet engine that's now filled with thousands of sensors collecting and transmitting data. At an even bigger scale, smart cities projects are filling entire regions with sensors to help us understand and control the environment."
• Beyond the Hype of Machine Learning (Free download, GovLoop ebook, 15-minute read) Read about machine learning's impact in the public sector, the 'how' and 'why' of artificial intelligence (AI), and how the Energy Department covers the spectrum of AI usage.
• Can Artificial Intelligence Keep Your Home Secure? (Paul Sullivan, NY Times, 1-29-18) Security companies are hoping to harness the potential of A.I., promising better service at lower prices. But experts say there are risks.
• What will our society look like when Artificial Intelligence is everywhere? (Stephan Talty, Smithsonian, April 2018) Will robots become self-aware? Will they have rights? Will they be in charge? Here are five scenarios from our future dominated by AI.
• Amazon Is Latest Tech Giant to Face Staff Backlash Over Government Work (Jamie Condliffe, NY Times, 6-22-18) Tech "firms have built artificial intelligence and cloud computing systems that governments find attractive. But as these companies take on lucrative contracts to furnish state and federal agencies with these technologies, they’re facing increasing pushback."